Saturday, July 12, 2008

Engineering scientific software

Some pointers from Greg Wilson on engineering scientific software.


The first pointer is to the "First International Workshop on Software Engineering for Computational Science and Engineering".


A couple of Wilson's short summaries of papers from this workshop (from his recent research reading):

Diane Kelly and Rebecca Sanders: “Assessing the Quality of Scientific Software” (SE-CSE workshop, 2008). A short, useful summary of scientists’ quality assurance practices. As many people have noted, most commercial tools aren’t helpful when (a) cosmetic rearrangement of the code changes the output, and (b) the whole reason you’re writing a program is that you can’t figure out what the answer ought to be.


Judith Segal: “Models of Scientific Software Development”. Segal has spent the last few years doing field studies of scientists building software, and based on those has developed a model of their processes that is distinct in several ways from both classical and agile models.


What I found interesting:

Kelly and Sanders suggest that the split between verification and validation is not useful, and propose 'assessment' as an umbrella term. (I will return to this point in a later post.)



The second pointer is to the presentations at TACC Scientific Software Days.


One of the presentations there was by Greg Wilson himself, on "HPC Considered Harmful". It has one slide on testing floating-point code (slide 15); in particular, a bullet point asserting that testing against a tolerance is "superstition, not science". (He has raised this point elsewhere as well.)
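
To make the target of that criticism concrete, here is a minimal sketch (in Python, with hypothetical function names, values, and tolerance) of what "testing against a tolerance" usually looks like in practice: the computed result is compared against a reference value, and it passes if the difference is below some chosen threshold.

    def computed_energy():
        # Stand-in for the output of the scientific code under test
        # (hypothetical function for this sketch).
        return 1.2345679

    REFERENCE = 1.2345678   # value from a hand calculation or a trusted earlier run
    TOLERANCE = 1e-6        # the chosen threshold for "close enough"

    def test_energy_within_tolerance():
        assert abs(computed_energy() - REFERENCE) <= TOLERANCE

    if __name__ == "__main__":
        test_energy_within_tolerance()
        print("passed, for this particular choice of tolerance")

As I read Wilson's objection, the trouble is not the comparison itself but how the tolerance gets chosen; more on that in the posts below.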


This statement has bothered me since I first read it, so the next few posts will explore some of the context surrounding this question. The first post will list different methods for assessing the correctness of a program, the next will categorize possible errors, and the last will look at what 'close enough' means.
